Fisher Consistency for Prior Probability Shift

Author

  • Dirk Tasche

Abstract

We introduce Fisher consistency in the sense of unbiasedness as a criterion to distinguish potentially suitable and unsuitable estimators of prior class probabilities in test datasets under prior probability and more general dataset shift. The usefulness of this unbiasedness concept is demonstrated with three examples of classifiers used for quantification: Adjusted Classify & Count, EM-algorithm and CDE-Iterate. We find that Adjusted Classify & Count and EM-algorithm are Fisher consistent. A counter-example shows that CDE-Iterate is not Fisher consistent and, therefore, cannot be trusted to deliver reliable estimates of class probabilities.
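The first of the three quantifiers mentioned, Adjusted Classify & Count (ACC), corrects the raw classify-and-count rate using the classifier's true- and false-positive rates estimated on the training data. A minimal sketch in Python (the function name, inputs, and the clipping to [0, 1] are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def adjusted_classify_and_count(test_scores, threshold, tpr, fpr):
    """Estimate the positive-class prevalence in a test set via ACC.

    test_scores : classifier scores for the test instances
    threshold   : decision threshold applied to the scores
    tpr, fpr    : true/false positive rates estimated on training data
    """
    # Raw Classify & Count: fraction of test instances predicted positive.
    cc = np.mean(test_scores >= threshold)
    if tpr == fpr:
        raise ValueError("ACC is undefined when TPR equals FPR")
    # ACC correction: invert E[CC] = p * TPR + (1 - p) * FPR for p.
    acc = (cc - fpr) / (tpr - fpr)
    # Clip to the valid probability range (a common practical choice).
    return float(np.clip(acc, 0.0, 1.0))
```

For example, with TPR = 0.9 and FPR = 0.1, a raw positive rate of 0.5 yields an adjusted prevalence estimate of (0.5 − 0.1) / (0.9 − 0.1) = 0.5.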


Similar resources

Fermat, Schubert, Einstein, and Behrens-Fisher: The Probable Difference Between Two Means When σ₁² ≠ σ₂²

The history of the Behrens-Fisher problem and some approximate solutions are reviewed. In outlining relevant statistical hypotheses on the probable difference between two means, the importance of the Behrens-Fisher problem from a theoretical perspective is acknowledged, but it is concluded that this problem is irrelevant for applied research in psychology, education, and related disciplines. Th...


Bayesian Estimation of Shift Point in Shape Parameter of Inverse Gaussian Distribution Under Different Loss Functions

In this paper, a Bayesian approach is proposed for shift point detection in an inverse Gaussian distribution. In this study, the mean parameter of the inverse Gaussian distribution is assumed to be constant and a shift point in the shape parameter is considered. First, the posterior distribution of the shape parameter is obtained. Then the Bayes estimators are derived under a class of priors and using variou...


Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds.

This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample si...


Limiting behavior of relative Rényi entropy in a non-regular location shift family

In a regular distribution family, the Cramér-Rao inequality holds, and the maximum likelihood estimator (MLE) converges to a normal distribution whose variance is the inverse of the Fisher information, because the Fisher information converges and is well-defined in this family. However, in a non-regular location shift family which is generated by a distribution on R whose support is not R (e.g., a W...


Implicit encoding of prior probabilities in optimal neural populations

Optimal coding provides a guiding principle for understanding the representation of sensory variables in neural populations. Here we consider the influence of a prior probability distribution over sensory variables on the optimal allocation of neurons and spikes in a population. We model the spikes of each cell as samples from an independent Poisson process with rate governed by an associated t...



Journal title:
  • Journal of Machine Learning Research

Volume 18  Issue 

Pages  -

Publication date 2017